Domain adaptation has been extensively investigated in computer vision but still requires access to target images at training time, which may be intractable in some conditions, especially for long-tail samples. In this paper, we propose the task of `Prompt-driven Zero-shot Domain Adaptation', in which we adapt a model trained on a source domain using only a general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, bringing them closer to the target text embedding while preserving their content and semantics. Second, we show that these augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets and gives comparable results on others. The code is available at https://github.com/astra-vision/PODA.
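The abstract leaves the optimization details out; as a rough illustration, the feature stylization step can be pictured as a channel-wise affine transform whose parameters are driven toward a prompt embedding by a cosine objective. The sketch below is a minimal numpy rendition under that reading; `pin_stylize` and `cosine_loss` are hypothetical names, and a real implementation would back-propagate through CLIP's image encoder rather than operate on placeholder vectors.

```python
import numpy as np

def pin_stylize(f, mu, sigma, eps=1e-5):
    """Affine feature stylization: normalize source features per channel,
    then shift/scale them toward the style implied by the prompt.
    f: (C, H, W) feature map; mu, sigma: (C,) learned affine parameters."""
    mean = f.mean(axis=(1, 2), keepdims=True)
    std = f.std(axis=(1, 2), keepdims=True)
    return sigma[:, None, None] * (f - mean) / (std + eps) + mu[:, None, None]

def cosine_loss(a, b):
    """1 - cosine similarity, the kind of objective used to pull a
    stylized-feature embedding toward a target text embedding in CLIP space."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

In this reading, `mu` and `sigma` would be optimized so that the CLIP embedding of the stylized features minimizes `cosine_loss` against the prompt embedding, while the instance normalization preserves the spatial content of the source features.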
Translated by Google Translate
In the literature, 3D reconstruction from 2D images has been extensively addressed but often still requires geometrical supervision. In this paper, we propose SceneRF, a self-supervised monocular scene reconstruction method with neural radiance fields (NeRF) learned from multiple posed image sequences. To improve geometry prediction, we introduce new geometry constraints and a novel probabilistic sampling strategy that efficiently updates the radiance fields. As the latter are conditioned on a single frame, scene reconstruction is achieved from the fusion of multiple synthesized novel depth views. This is enabled by our spherical decoder, which allows hallucination beyond the input frame's field of view. Thorough experiments demonstrate that we outperform all baselines on all metrics for novel depth view synthesis and scene reconstruction. Our code is available at https://astra-vision.github.io/SceneRF.
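The abstract does not spell out the probabilistic sampling strategy; one plausible reading, borrowed from NeRF-style hierarchical sampling, is to resample depths along each ray from the distribution induced by the current density estimates, concentrating samples near likely surfaces. A minimal numpy sketch under that assumption (`resample_depths` is a hypothetical name):

```python
import numpy as np

def resample_depths(bin_edges, weights, n_samples, rng):
    """Importance-resample depth values along a ray via the inverse CDF:
    more samples are drawn where the normalized weights suggest a surface.
    bin_edges: (N+1,) depth bin boundaries; weights: (N,) per-bin weights."""
    pdf = weights / weights.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_samples)          # uniform draws in [0, 1)
    return np.interp(u, cdf, bin_edges)      # invert the CDF
```

Under this scheme, a ray whose weight mass sits in one depth bin yields samples almost entirely inside that bin, which is the behavior one would want for refining geometry around a predicted surface.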
Multi-task learning has recently emerged as a promising solution for a comprehensive understanding of complex scenes. Besides being memory-efficient, appropriately designed multi-task models can exchange complementary signals across tasks. In this work, we jointly address 2D semantic segmentation and geometry-related tasks, namely dense depth, surface normal, and edge estimation, showing their benefit on indoor and outdoor datasets. We propose a novel multi-task learning architecture that exploits pairwise cross-task exchange through correlation-guided attention and self-attention to enhance the average representation learning for all tasks. We conduct extensive experiments over three multi-task setups, showing the benefit of our proposal compared to competitive baselines on both synthetic and real benchmarks. We also extend our method to the novel multi-task unsupervised domain adaptation setting. Our code is available at https://github.com/cv-rits/densemtl.
MonoScene proposes a 3D Semantic Scene Completion (SSC) framework in which the dense geometry and semantics of a scene are inferred from a single monocular RGB image. Unlike the SSC literature, which relies on 2.5D or 3D input, we tackle the complex problem of 2D-to-3D scene reconstruction while jointly inferring its semantics. Our framework relies on successive 2D and 3D UNets bridged by a novel 2D-3D feature projection inspired by optics, and introduces a 3D context relation prior to enforce spatio-semantic consistency. Along with the architectural contributions, we introduce novel global scene and local frustum losses. Experiments show that we outperform the literature on all metrics and datasets, while hallucinating plausible scenery even beyond the camera field of view. Our code and trained models are available at https://github.com/cv-rits/monoscene.
Most image-to-image translation methods require a large number of training images, which restricts their applicability. We propose ManiFest, a framework for few-shot image translation that learns a context-aware representation of the target domain from only a few images. To enforce feature consistency, our framework learns a style manifold between the source domain and a proxy anchor domain (assumed to be composed of a large number of images). The learned manifold is interpolated and deformed toward the few-shot target domain via patch-based adversarial and feature-statistics alignment losses. All of these components are trained simultaneously in a single end-to-end loop. Beyond the general few-shot translation task, our method can also be conditioned on a single exemplar image to reproduce its specific style. Extensive experiments demonstrate the efficacy of ManiFest on multiple tasks, outperforming the state of the art on all metrics in both the general and exemplar-based scenarios. Our code will be open-sourced.
Image-to-image translation (i2i) networks suffer from entanglement effects in the presence of physics-related phenomena in the target domain (e.g., occlusions, fog), which lowers the translation quality, controllability, and variability. In this paper, we build upon a collection of simple physics models and propose a comprehensive method in which a physical model guides the disentanglement of visual traits in the target images, rendering some of the target traits while learning the remaining ones. Because it yields explicit and interpretable outputs, our physical model (optimally regressed on the target) allows generating unseen scenarios in a controllable manner. We also extend our framework, demonstrating its versatility with neural-guided disentanglement. The results show that our disentanglement strategy substantially improves performance, both qualitatively and quantitatively, in several challenging image translation scenarios.
CoMoGAN is a continuous GAN relying on the unsupervised reorganization of the target data on a functional manifold. To this end, we introduce a new Functional Instance Normalization layer and residual mechanism that disentangle image content from position on the target manifold. We rely on naive physics-inspired models to guide the training while allowing private model/translation features. CoMoGAN can be used with any GAN backbone and enables new types of image translation, such as cyclic image translation (e.g., timelapse generation) or detached linear translation. It outperforms the literature on all datasets. Our code is available at http://github.com/cv-rits/comogan.
Domain adaptation is an important task for enabling learning when labels are scarce. While most works focus only on the image modality, there are many important multi-modal datasets. To leverage multi-modality for domain adaptation, we propose cross-modal learning, in which we enforce consistency between the predictions of the two modalities via mutual mimicking. We constrain our network to make correct predictions on labeled data and consistent predictions across modalities on unlabeled target-domain data. Experiments in unsupervised and semi-supervised domain adaptation settings prove the effectiveness of this novel domain adaptation strategy. Specifically, we evaluate on the task of 3D semantic segmentation from either 2D images, 3D point clouds, or both. We leverage recent driving datasets to produce a wide variety of domain adaptation scenarios, including changes in scene layout, lighting, sensor setup, and weather, as well as the synthetic-to-real setup. Our method significantly improves over previous uni-modal adaptation baselines in all adaptation scenarios. Our code is publicly available at https://github.com/valeoai/xmuda_journal
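As a rough sketch of the mutual-mimicking idea described above, each modality's class distribution can be pushed toward the other's with a symmetric KL objective over the per-point predictions. The snippet below is a minimal numpy illustration with hypothetical names (`cross_modal_loss`), not the authors' exact formulation, which operates on 2D and 3D network heads with the mimicking targets detached from the gradient.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    """KL(p || q) per point, summed over classes."""
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)

def cross_modal_loss(logits_2d, logits_3d):
    """Symmetric mutual-mimicking objective: each modality's class
    distribution is pulled toward the other's. Inputs: (N, C) logits
    for N points sharing 2D-3D correspondences."""
    p2d, p3d = softmax(logits_2d), softmax(logits_3d)
    return kl_div(p3d, p2d).mean() + kl_div(p2d, p3d).mean()
```

The loss vanishes when the two modalities already agree, so on unlabeled target data it acts purely as a consistency regularizer across the 2D and 3D streams.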
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
Due to the environmental impact of the construction industry, repurposing existing buildings and making them more energy-efficient has become a high-priority issue. However, a legitimate concern of land developers relates to the buildings' state of conservation. For that reason, infrared thermography has been used as a powerful tool to characterize the state of conservation of these buildings by detecting pathologies such as cracks and humidity. Thermal cameras detect the radiation emitted by any material and translate it into temperature-color-coded images. Abnormal temperature changes may indicate the presence of pathologies; however, reading thermal images is not always straightforward. This research project aims to combine infrared thermography and machine learning (ML) to help stakeholders determine the viability of reusing existing buildings by identifying their pathologies and defects more efficiently and accurately. In this phase of the research project, we used a deep convolutional neural network (DCNN) image classification model to differentiate three levels of cracks in one particular building. The model's accuracy was compared between the MSX and thermal images acquired from two distinct thermal cameras, as well as fused images (formed through multisource information), to test the influence of the input data and network on the detection results.